APPARATUS AND METHOD FOR ERROR SUPPRESSION IN LOW-DELAY UNIFIED SPEECH AND AUDIO CODING
Patent abstract:
Apparatus and method for error suppression in low-delay unified speech and audio coding. An apparatus (100) for generating spectral reset values for an audio signal is provided. The apparatus (100) comprises a buffer unit (110) for storing previous spectral values relating to a previously received error-free audio frame. Furthermore, the apparatus (100) comprises a suppression frame generator (120) for generating the spectral reset values when a current audio frame is not received or is erroneous. The previously received error-free audio frame comprises filter information, the filter information having an associated filter stability value indicating a stability of a predictive filter. The suppression frame generator (120) is adapted to generate the spectral reset values based on the previous spectral values and based on the filter stability value.
Publication number: BR112013020324B1 Application number: R112013020324-2 Filing date: 2012-02-13 Publication date: 2021-06-29 Inventors: Jérémie Lecomte; Martin Dietz; Michael Schnabel; Ralph Sperschneider Applicant: Fraunhofer - Gesellschaft Zur Forderung Der Angewandten Forschung E.V IPC main class:
Patent description:
Description

The present invention relates to the processing of audio signals and, in particular, to an apparatus and method for error suppression in Low-Delay Unified Speech and Audio Coding (LD-USAC). Processing of audio signals has advanced in many ways and has become increasingly important. In audio signal processing, Low-Delay Unified Speech and Audio Coding aims to provide suitable coding techniques for speech, audio and any mixture of speech and audio. In addition, LD-USAC aims to ensure high quality for the encoded audio signals. Compared to USAC (Unified Speech and Audio Coding), the delay in LD-USAC is reduced. When encoding audio data, an LD-USAC encoder analyzes the audio signal to be encoded. The LD-USAC encoder encodes the audio signal by encoding linear prediction filter coefficients of a prediction filter. Depending on the audio data to be encoded in a particular audio frame, the LD-USAC encoder decides whether ACELP (Algebraic Code Excited Linear Prediction) or TCX (Transform Coded Excitation) is used for encoding. While ACELP uses LP filter coefficients (linear prediction filter coefficients), adaptive and algebraic codebook indices and adaptive and algebraic codebook gains, TCX uses LP filter coefficients, energy parameters and quantization indices for a Modified Discrete Cosine Transform (MDCT). On the decoder side, the LD-USAC decoder determines whether ACELP or TCX was used to encode the audio data of a current audio signal frame. The decoder then decodes the audio signal frame accordingly. Occasionally, data transmission fails. For example, an audio signal frame transmitted by a sender arrives at a receiver with errors, does not arrive at all, or arrives late. In these cases, error suppression may be necessary to ensure that missing or erroneous audio data can be replaced. This is particularly true for applications with real-time requirements, as requesting a retransmission of an erroneous or missing frame would violate the low-delay requirements. However, existing suppression techniques used for other audio applications often create an artificial sound caused by synthetic artifacts. It is therefore an object of the present invention to provide improved concepts for error suppression for an audio signal frame. The object of the present invention is solved by an apparatus according to claim 1, by a method according to claim 15 and by a computer program according to claim 16. An apparatus for generating spectral reset values for an audio signal is provided. The apparatus comprises a memory unit for storing previous spectral values relating to a previously received error-free audio frame. In addition, the apparatus comprises a suppression frame generator for generating the spectral reset values when a current audio frame has not been received or is erroneous. The previously received error-free audio frame comprises filter information, the filter information having an associated filter stability value indicating a stability of a predictive filter. The suppression frame generator is adapted to generate the spectral reset values based on the previous spectral values and based on the filter stability value.
The present invention is based on the finding that, while the previous spectral values of a previously received error-free frame can be used for error suppression, a fade-out should be performed on these values, and the fade should depend on the stability of the signal. The less stable the signal, the faster the fade-out should be conducted. In one application, the suppression frame generator can be adapted to generate the spectral reset values by randomly changing the sign of the previous spectral values. According to a further application, the suppression frame generator can be configured to generate the spectral reset values by multiplying each of the previous spectral values by a first gain factor when the filter stability value has a first value, and by multiplying each of the previous spectral values by a second gain factor that is less than the first gain factor, when the filter stability value has a second value that is less than the first value. In a further application, the suppression frame generator can be adapted to generate the spectral reset values based on the filter stability value, wherein the previously received error-free audio frame comprises first predictive filter coefficients, wherein a predecessor frame of the previously received error-free audio frame comprises second predictive filter coefficients, and wherein the filter stability value depends on the first predictive filter coefficients and on the second predictive filter coefficients. According to an application, the suppression frame generator can be adapted to determine the filter stability value based on the first predictive filter coefficients of the previously received error-free audio frame and based on the second predictive filter coefficients of the predecessor frame of the previously received error-free frame. In a further application, the suppression frame generator can be adapted to generate the spectral reset values based on the filter stability value, wherein the filter stability value depends on a distance measure LSFdist, and wherein the distance measure LSFdist is defined by the following formula:

LSFdist = Σ_{i=0}^{u} (f_i − f_i^(p))²

where u+1 specifies a total number of the first predictive filter coefficients of the previously received error-free audio frame, and where u+1 also specifies a total number of the second predictive filter coefficients of the predecessor frame of the previously received error-free audio frame, where f_i specifies the i-th filter coefficient of the first predictive filter coefficients and where f_i^(p) specifies the i-th filter coefficient of the second predictive filter coefficients. According to an application, the suppression frame generator can be adapted to generate the spectral reset values also based on frame class information relating to the previously received error-free audio frame. For example, the frame class information indicates that the previously received error-free audio frame is classified as "artificial start", "start", "voice transition", "voiceless transition", "no voice" or "with voice". In a further application, the suppression frame generator can be adapted to generate the spectral reset values also based on a number of consecutive frames that did not reach a receiver or were erroneous since a last error-free audio frame arrived at the receiver, wherein no other error-free audio frame has arrived at the receiver since the last error-free audio frame arrived at the receiver.
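The core replacement-value generation summarized above can be sketched in a few lines of code. The following is a minimal illustration only, not the normative LD-USAC procedure; the function name and the use of numpy are assumptions, and the gain factor is assumed to be supplied by the stability-dependent fading logic described further below.

```python
import numpy as np

def generate_spectral_reset_values(previous_spectral_values, gain_factor, rng=None):
    """Sketch: derive spectral reset values from the previous spectral values
    of the last error-free frame by randomly inverting the sign of each value
    (to reduce the repetitive character) and attenuating with a gain factor
    that is chosen smaller for a less stable predictive filter."""
    rng = np.random.default_rng() if rng is None else rng
    prev = np.asarray(previous_spectral_values, dtype=float)
    signs = rng.choice((-1.0, 1.0), size=prev.shape)  # random sign inversion
    return gain_factor * signs * prev
```

For a stable filter, a larger gain factor would be passed in (e.g. 1.7 in the worked example later in this description), for an unstable filter a smaller one (e.g. 1.3).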
According to another application, the suppression frame generator can be adapted to calculate a fading factor based on the filter stability value and based on the number of consecutive frames that did not reach the receiver or that are erroneous. In addition, the suppression frame generator can be adapted to generate the spectral reset values by multiplying the fading factor by at least some of the previous spectral values, or by at least some of the values from a group of intermediate values, wherein each of the intermediate values depends on at least one of the previous spectral values. In a further application, the suppression frame generator can be adapted to generate the spectral reset values based on the previous spectral values, based on the filter stability value and also based on a prediction gain of a temporal noise shaping. According to an additional application, an audio signal decoder is provided. The audio signal decoder may comprise an apparatus for decoding spectral audio signal values and an apparatus for generating spectral reset values according to one of the applications described above. The apparatus for decoding spectral audio signal values can be adapted to decode the spectral values of an audio signal based on a previously received error-free audio frame. Furthermore, the apparatus for decoding spectral audio signal values can also be adapted to store the spectral values of the audio signal in the memory unit of the apparatus for generating spectral reset values. The apparatus for generating spectral reset values can be adapted to generate the spectral reset values based on the spectral values stored in the memory unit, when the current audio frame is not received or is erroneous. Furthermore, an audio signal decoder according to another application is provided. The audio signal decoder comprises a decoding unit for generating first intermediate spectral values on the basis of a received error-free audio frame, a temporal noise shaping unit for conducting temporal noise shaping on the first intermediate spectral values to obtain a second set of intermediate spectral values, a prediction gain calculator for calculating a prediction gain of the temporal noise shaping depending on the first intermediate spectral values and depending on the second set of intermediate spectral values, an apparatus according to one of the applications described above for generating spectral reset values when a current audio frame is not received or is erroneous, and a value selector for storing the first intermediate spectral values in the memory unit of the apparatus for generating spectral reset values if the prediction gain is greater than or equal to a threshold value, or for storing the second set of intermediate spectral values in the memory unit of the apparatus for generating spectral reset values if the prediction gain is less than the threshold value. In addition, another audio signal decoder is provided according to an additional application. The audio signal decoder comprises a first decoding module for generating spectral values based on a received error-free audio frame, an apparatus for generating spectral reset values according to one of the applications described above, and a processing module for processing the generated spectral values by conducting temporal noise shaping, applying noise filling and/or applying a global gain to obtain the audio spectral values of the decoded audio signal.
The apparatus for generating spectral reset values can be adapted to generate the spectral reset values and to feed them to the processing module when a current frame has not been received or is erroneous. Preferred applications are provided in the dependent claims. In the following, preferred applications of the present invention are described with reference to the figures, in which:

Figure 1 illustrates an apparatus for generating spectral reset values for an audio signal according to an application;
Figure 2 illustrates an apparatus for generating spectral reset values for an audio signal according to a further application;
Figures 3a - 3c illustrate the multiplication of a gain factor and previous spectral values according to an application;
Figure 4a illustrates the repetition of a signal portion comprising a start (onset) in a time domain;
Figure 4b illustrates the repetition of a stable signal portion in a time domain;
Figures 5a - 5b illustrate examples in which generated gain factors are applied to the spectral values of Figure 3a, according to an application;
Figure 6 illustrates an audio signal decoder according to an application;
Figure 7 illustrates an audio signal decoder according to another application; and
Figure 8 illustrates an audio signal decoder according to an additional application.

Figure 1 illustrates an apparatus 100 for generating spectral reset values for an audio signal. The apparatus 100 comprises a storage unit 110 for storing previous spectral values relating to a previously received error-free audio frame. In addition, the apparatus 100 comprises a suppression frame generator 120 for generating spectral reset values when a current audio frame has not been received or is erroneous. The previously received error-free audio frame comprises filter information, the filter information having an associated filter stability value indicating a stability of a predictive filter. The suppression frame generator 120 is adapted to generate the spectral reset values based on the previous spectral values and based on the filter stability value. The previously received error-free audio frame may, for example, comprise the previous spectral values. For example, the previous spectral values can be contained in the previously received error-free audio frame in an encoded form. Or the previous spectral values can, for example, be values that may have been generated by modifying values comprised in the previously received error-free audio frame, for example, spectral values of the audio signal. For example, the values comprised in the previously received error-free audio frame may have been modified by multiplying each of them by a gain factor to obtain the previous spectral values. Or the previous spectral values may, for example, be values that may have been generated based on values comprised in the previously received error-free audio frame. For example, each of the previous spectral values may have been generated by using at least some of the values comprised in the previously received error-free audio frame, so that each of the previous spectral values depends on at least some of the values comprised in the previously received error-free audio frame. For example, the values comprised in the previously received error-free audio frame may have been used to generate an intermediate signal. For example, the spectral values of the generated intermediate signal can then be considered as the previous spectral values related to the previously received error-free audio frame.
Arrow 105 indicates that the previous spectral values are stored in the memory unit 110. The suppression frame generator 120 can generate spectral reset values when a current audio frame has not been received in time or is erroneous. For example, a transmitter may transmit a current audio frame to a receiver, where the apparatus 100 for generating spectral reset values may, for example, be located. However, the current audio frame does not reach the receiver, for example, because of some kind of transmission error. Or the current transmitted audio frame is received by the receiver, but, for example due to a disturbance during transmission, the current audio frame is erroneous. In these or other cases, the suppression frame generator 120 is required for error suppression. To this end, the suppression frame generator 120 is adapted to generate spectral reset values based on at least some of the previous spectral values when a current audio frame has not been received or is erroneous. According to the applications, the previously received error-free audio frame is assumed to comprise filter information, the filter information having an associated filter stability value indicating a stability of a predictive filter defined by the filter information. For example, the audio frame may comprise predictive filter coefficients, e.g. linear prediction filter coefficients, as filter information. The suppression frame generator 120 is further adapted to generate the spectral reset values based on the previous spectral values and based on the filter stability value. For example, the spectral reset values can be generated based on the previous spectral values and based on the filter stability value such that each of the previous spectral values is multiplied by a gain factor, where the gain factor value depends on the filter stability value. For example, the gain factor can be smaller in a second case compared to a first case, when the filter stability value in the second case is smaller than in the first case. According to another application, the spectral reset values can be generated based on the previous spectral values and based on the filter stability value as follows. Intermediate values can be generated by modifying the previous spectral values, for example, by randomly inverting the sign of the previous spectral values, and each of the intermediate values can be multiplied by a gain factor, where the gain factor value depends on the filter stability value. For example, the gain factor may be smaller in a second case than in a first case, when the filter stability value in the second case is smaller than in the first case. According to a further application, the previous spectral values can be used to generate an intermediate signal, and a spectral-domain synthesis signal can be generated by applying a linear prediction filter to the intermediate signal. Then, each spectral value of the generated synthesis signal can be multiplied by a gain factor, where the gain factor value depends on the filter stability value. As above, the gain factor can be, for example, smaller in a second case than in a first case if the filter stability value in the second case is smaller than in the first case. A particular application depicted in Figure 2 is now explained in detail. A first audio frame 101 arrives at the receiver side, where an apparatus 100 for generating spectral reset values can be located. On the receiver side, it is checked whether the audio frame is error-free or not.
For example, an error-free audio frame is an audio frame where all audio data contained in the audio frame is error-free. To this end, means (not shown) can be employed on the receiver side which determine whether a received frame is error-free or not. To this end, state-of-the-art error detection techniques can be employed, such as means for testing whether the received audio data is consistent with a received check bit or a received checksum. Or the error detection means may employ a cyclic redundancy check (CRC) to test whether the received audio data is consistent with a received CRC value. Any other technique for testing whether a received audio frame is error-free or not can also be used. The first audio frame 101 comprises audio data 102. Furthermore, the first audio frame comprises check data 103. For example, the check data can be a check bit, a checksum or a CRC value, which can be employed on the receiver side to test whether the received audio frame 101 is error-free (i.e. is an error-free frame) or not. If it has been determined that the audio frame 101 is error-free, then the values relating to the error-free audio frame, e.g. the audio data 102, will be stored in the memory unit 110 as "previous spectral values". These values can, for example, be spectral values of the audio signal encoded in the audio frame. Or, the values that are stored in the memory unit can be, for example, intermediate values resulting from modifying or processing encoded values contained in the audio frame. Alternatively, a signal, for example a synthesis signal in the spectral domain, can be generated on the basis of encoded values of the audio frame, and the spectral values of the generated signal can be stored in the memory unit 110. Storing the spectral values in the storage unit 110 is indicated by arrow 105. Furthermore, the audio data 102 of the audio frame 101 is used at the receiver side to decode the encoded audio signal (not shown). The decoded part of the audio signal can then be played back on the receiver side. Thereafter, after processing the audio frame 101, the receiver side waits for the next audio frame 111 (which also comprises audio data 112 and check data 113) to arrive at the receiver side. However, for example, although audio frame 111 is transmitted (as shown at 115), something unexpected occurs. This is illustrated at 116. For example, a link may be disturbed in such a way that bits of audio frame 111 are unintentionally changed during transmission, or, for example, audio frame 111 may not arrive at the receiver side at all. In such a situation, suppression is needed. When, for example, an audio signal that is generated based on the received audio frames is reproduced on the receiver side, techniques that conceal a lost frame should be used. For example, the concepts should define what to do when a current audio frame of an audio signal that is needed for playback does not reach the receiver side or is erroneous. The suppression frame generator 120 is adapted to provide such error suppression. In Figure 2, the suppression frame generator 120 is informed that a current frame has not been received or is erroneous. On the receiver side, means (not shown) may be used to indicate to the suppression frame generator 120 that suppression is required (this is shown by the dashed arrow 117).
To perform error suppression, the suppression frame generator 120 may request some or all of the previous spectral values, for example, the previous audio values, relating to the previously received error-free frame 101 from the memory unit 110. This request is illustrated by arrow 118. As in the example of Figure 2, the previously received error-free frame may be, for example, the last received error-free frame, e.g., audio frame 101. However, a different error-free frame can also be employed on the receiver side as the previously received error-free frame. The suppression frame generator then receives (some or all of) the previous spectral values relating to the previously received error-free audio frame (e.g., audio frame 101) from the storage unit 110, as shown at 119. For example, in the case of multiple lost frames, the storage is fully or partially updated. In one application, the steps illustrated by arrows 118 and 119 can be realized such that the suppression frame generator 120 loads the previous spectral values from the storage unit 110. The suppression frame generator 120 then generates the spectral reset values based on at least some of the previous spectral values. Ideally, the listener should not become aware that one or more audio frames are missing, such that the sound impression created by the playback is not disturbed. A simple way to achieve suppression would be to simply use the values, for example, the spectral values, of the last error-free frame as spectral reset values for the current missing or erroneous frame. However, this causes particular problems, especially in the case of starts (onsets), for example, when the sound volume suddenly changes significantly. For example, in the case of a noise burst, simply repeating the previous spectral values of the last frame would also repeat the noise burst. In contrast, if the audio signal is quite stable, for example, its volume does not change significantly, or, for example, its spectral values do not change significantly, then the effect of artificially generating the current audio signal portion based on previously received audio data, for example by repeating the previously received audio signal portion, would be less disturbing to a listener. The applications are based on this finding. The suppression frame generator 120 generates the spectral reset values based on at least some of the previous spectral values and based on the filter stability value indicating a stability of a predictive filter related to the audio signal. Thus, the suppression frame generator 120 considers the stability of the audio signal, for example, the stability of the audio signal relative to the previously received error-free frame. To do this, the suppression frame generator 120 can change the value of a gain factor that is applied to the previous spectral values. For example, each of the previous spectral values is multiplied by the gain factor. This is illustrated in Figures 3a - 3c. In Figure 3a, some of the spectral lines of an audio signal relating to a previously received error-free frame are illustrated before an original gain factor is applied. For example, the original gain factor could be a gain factor that is transmitted in the audio frame. On the receiver side, if the received frame is error-free, the decoder can, for example, be configured to multiply each of the spectral values of the audio signal by the original gain factor g to obtain a modified spectrum. This is shown in Figure 3b.
In Figure 3b, the spectral lines that result from multiplying the spectral lines in Figure 3a by a gain factor are represented. For simplicity, the original gain factor g is assumed to be 2.0 (g = 2.0). Figures 3a and 3b illustrate a scenario where no suppression was needed. In Figure 3c, a scenario is considered where a current frame has either not been received or is erroneous. In such a case, replacement values have to be generated. Therefore, the previous spectral values relating to the previously received error-free frame which have been stored in the memory unit can be used to generate the spectral reset values. In the example of Figure 3c, it is assumed that the spectral reset values are generated based on the received values, but that the original gain factor is modified. A different, smaller gain factor is used to generate the spectral reset values than the gain factor that is used to amplify the received values in the case of Figure 3b. Hence, a fade-out is achieved. For example, the modified gain factor used in the scenario illustrated in Figure 3c might be 75% of the original gain factor, for example, 0.75 · 2.0 = 1.5. By multiplying each of the spectral values by the modified (reduced) gain factor, a fade-out is conducted, as the modified gain factor of 1.5, which is used for the multiplication of each of the spectral values, is less than the previous gain factor (gain factor g = 2.0) used for the multiplication of the spectral values in the error-free case. The present invention is, inter alia, based on the finding that repeating the values of a previously received error-free frame is perceived as more disturbing when the respective audio signal portion is unstable than when the respective audio signal portion is stable. This is illustrated in Figures 4a and 4b. For example, if the previously received error-free frame comprises a start (onset), then the start would be repeated. Figure 4a illustrates a portion of an audio signal wherein a transient occurs in the portion of the audio signal associated with the last received error-free frame. In Figures 4a and 4b, the abscissa indicates time and the ordinate indicates an amplitude value of the audio signal. The signal portion specified by 410 relates to the audio signal portion relative to the last received error-free frame. The dashed line in area 420 indicates a possible continuation of the curve in the time domain, if the values relating to the previously received error-free frame are simply copied and used as spectral reset values of a reset frame. As can be seen, the transient is likely to be repeated, which may be perceived as disturbing by the listener. In contrast, Figure 4b illustrates an example where the signal is quite stable. In Figure 4b, a portion of the audio signal relating to the last frame received without error is illustrated. In the signal portion of Figure 4b, no transient occurred. Again, the abscissa indicates time and the ordinate indicates an audio signal amplitude. Area 430 refers to the portion of the signal associated with the last received error-free frame. The dashed line in area 440 indicates a possible continuation of the curve in the time domain, if the values of the previously received error-free frame are copied and used as spectral reset values of a reset frame.
In such situations where the audio signal is quite stable, repeating the last portion of the signal seems to be more acceptable to a listener than in the situation where a start is repeated, as illustrated in Figure 4a. The present invention is based on the finding that spectral reset values can be generated based on previously received values of a previous audio frame, but that the stability of the predictive filter, which reflects the stability of the audio signal portion, must also be considered. For this, a filter stability value is taken into account. The filter stability value can, for example, indicate the stability of the predictive filter. In LD-USAC, predictive filter coefficients, e.g. linear prediction filter coefficients, can be determined on the encoder side and can be transmitted to the receiver within the audio frame. On the decoder side, the decoder then receives the predictive filter coefficients, for example, the predictive filter coefficients of the previously received error-free frame. Moreover, the decoder may already have received the predictive filter coefficients of the predecessor frame of the previously received frame and may, for example, have stored these predictive filter coefficients. The predecessor frame of the previously received error-free frame is the frame immediately preceding the previously received error-free frame. The suppression frame generator can then determine the filter stability value based on the predictive filter coefficients of the previously received error-free frame and based on the predictive filter coefficients of the predecessor frame of the previously received error-free frame. In the following, the determination of the filter stability value according to an application is presented, which is particularly suitable for LD-USAC. The stability value considered depends on the predictive filter coefficients, for example 10 predictive filter coefficients f_i in the narrowband case, or for example 16 predictive filter coefficients f_i in the wideband case, which may have been transmitted in the previously received error-free frame. In addition, the predictive filter coefficients of the predecessor frame of the previously received error-free frame are also considered, for example, another 10 predictive filter coefficients f_i^(p) in the narrowband case (or, for example, another 16 predictive filter coefficients f_i^(p) in the wideband case). For example, the predictive filter coefficients can be determined on the encoder side by calculating an autocorrelation of a windowed speech signal s', such that:

r(k) = Σ_{n=k}^{t} s'(n) · s'(n − k)

wherein s' is a windowed speech signal, for example the speech signal which is to be encoded after a window has been applied to it. t can be, for example, 383. Alternatively, t can have other values, such as 191 or 95. In other applications, instead of an autocorrelation calculation, the Levinson-Durbin algorithm, known from the state of the art, can alternatively be used, see, for example, [3]: 3GPP, "Speech codec speech processing functions; Adaptive Multi-Rate - Wideband (AMR-WB) speech codec; Transcoding functions", 2009, V9.0.0, 3GPP TS 26.190. As already said, the predictive filter coefficients f_i and f_i^(p) may have been transmitted to the receiver within the previously received error-free frame and the predecessor frame of the previously received error-free frame, respectively.
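The autocorrelation computation above can be illustrated with a short sketch. This is a hedged example only: the function name is hypothetical, and it assumes the windowed speech signal s' is available as an array of at least t + 1 samples, with t = 383 as in the example value given above.

```python
import numpy as np

def autocorrelation(s_windowed, k, t=383):
    """r(k) = sum over n = k..t of s'(n) * s'(n - k), computed on the windowed
    speech signal s'. The predictive filter coefficients can then be derived
    from these autocorrelation values, e.g. with a Levinson-Durbin recursion.
    s_windowed must contain at least t + 1 samples."""
    s = np.asarray(s_windowed, dtype=float)
    n = np.arange(k, t + 1)
    return float(np.sum(s[n] * s[n - k]))
```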
On the decoder side, a Line Spectral Frequency (LSF) distance measure LSFdist can then be calculated using the formula:

LSFdist = Σ_{i=0}^{u} (f_i − f_i^(p))²

u can be the number of predictive filter coefficients in the previously received error-free frame minus 1. For example, if the previously received error-free frame had 10 predictive filter coefficients, then, for example, u = 9. The number of predictive filter coefficients in the previously received error-free frame is typically identical to the number of predictive filter coefficients in the predecessor frame of the previously received error-free frame. The stability value θ can then be calculated according to the formula:

θ = 1.25 − LSFdist / v, limited to the range between 0 and 1.

v can be an integer. For example, v can be 156250 in the narrowband case. In an additional application, v can be 400000 in the wideband case. θ is considered to indicate a very stable predictive filter if θ is 1 or close to 1. θ is considered to indicate a very unstable predictive filter if θ is 0 or close to 0. The suppression frame generator can be adapted to generate spectral reset values based on previous spectral values of a previously received error-free frame, when a current audio frame is not received or is erroneous. Furthermore, the suppression frame generator can be adapted to calculate a stability value θ based on the predictive filter coefficients f_i of the previously received error-free frame and also based on the predictive filter coefficients f_i^(p) of the predecessor frame of the previously received error-free frame, as described above. In one application, the suppression frame generator can be adapted to use the filter stability value to generate a modified gain factor, e.g. by modifying an original gain factor, and to apply the generated gain factor to the previous spectral values relating to the audio frame to obtain the spectral reset values. In other applications, the suppression frame generator is adapted to apply the generated gain factor to values derived from the previous spectral values. For example, the suppression frame generator can generate the modified gain factor by multiplying a received gain factor by a fading factor, wherein the fading factor depends on the filter stability value. Consider, for example, that a gain factor received in an audio signal frame has the value 2.0. The gain factor is typically used to multiply the previous spectral values to obtain modified spectral values. To apply a fade-out, a modified gain factor is generated which depends on the stability value θ. For example, if the stability value is θ = 1, then the predictive filter is considered to be very stable. The fading factor can then be set to 0.85 if the frame that is to be reconstructed is the first missing frame. Thus, the modified gain factor is 0.85 · 2.0 = 1.7. Each of the received spectral values of the previously received frame is then multiplied by a modified gain factor of 1.7 instead of 2.0 (the received gain factor) to generate the spectral reset values. Figure 5a shows an example in which a generated gain factor of 1.7 is applied to the spectral values of Figure 3a. However, if, for example, the stability value is θ = 0, then the predictive filter is considered to be very unstable. The fading factor can then be set to 0.65 if the frame that is to be reconstructed is the first missing frame. Thus, the modified gain factor is 0.65 · 2.0 = 1.3.
Each of the received spectral values of the previously received frame is then multiplied by a modified gain factor of 1.3 instead of 2.0 (the received gain factor) to generate the spectral reset values. Figure 5b illustrates an example in which a generated gain factor of 1.3 is applied to the spectral values of Figure 3a. As the gain factor in the example of Figure 5b is smaller than in the example of Figure 5a, the magnitudes in Figure 5b are also smaller than in the example of Figure 5a. Different strategies can be applied according to the value of θ, where θ can be any value between 0 and 1. For example, a value of θ ≥ 0.5 can be interpreted as 1, such that the fading factor has the same value as if θ were 1, for example, the fading factor is 0.85. A value of θ < 0.5 can be interpreted as 0, such that the fading factor has the same value as if θ were 0, for example, the fading factor is 0.65. According to another application, the fading factor value can alternatively be interpolated if the value of θ is between 0 and 1. For example, considering that the fading factor value is 0.85 if θ is 1, and 0.65 if θ is 0, then the fading factor can be calculated according to the formula: fading_factor = 0.65 + θ · 0.2, for 0 < θ < 1. In a further application, the suppression frame generator is adapted to generate the spectral reset values further on the basis of frame class information relating to the previously received error-free frame. The frame class information can be determined by an encoder. The encoder can then encode the frame class information into the audio frame. The decoder can then obtain the frame class information by decoding the previously received error-free frame. Alternatively, the decoder can determine the frame class information itself by examining the audio frame. Furthermore, the decoder can be configured to determine the frame class information based on information from the encoder and based on an analysis of the received audio data, the analysis being conducted by the decoder itself. The frame class can, for example, indicate whether the frame is classified as "artificial start", "start", "voice transition", "voiceless transition", "no voice" or "with voice". For example, "start" may indicate that the previously received audio frame comprises a start (onset). For example, "with voice" may indicate that the previously received audio frame comprises voiced data. For example, "no voice" may indicate that the previously received audio frame comprises unvoiced data. For example, "voice transition" may indicate that the previously received audio frame comprises voiced data, but that, compared to the predecessor of the previously received audio frame, the pitch has changed. For example, "artificial start" may indicate that the energy of the previously received audio frame has been enhanced (thus, for example, creating an artificial start). For example, "voiceless transition" may indicate that the previously received frame comprises unvoiced data, but that the unvoiced sound is close to a change. Depending on the frame class of the previously received audio frame, the stability value θ and the number of successive erased frames, the attenuation gain, e.g. the fading factor, can, for example, be set as a function of these parameters. According to an application, the suppression frame generator can generate a modified gain factor by multiplying a received gain factor by the fading factor determined based on the filter stability value and the frame class.
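The stability value and the interpolated fading factor described above can be sketched as follows. This is a minimal illustration; the function names are hypothetical, and the clipping of θ to the range [0, 1] reflects the interpretation given above (1 = very stable, 0 = very unstable).

```python
import numpy as np

def lsf_distance(f_curr, f_prev):
    """LSFdist: sum of squared differences between the predictive filter
    coefficients f_i of the previously received error-free frame and the
    coefficients f_i^(p) of its predecessor frame."""
    f_curr = np.asarray(f_curr, dtype=float)
    f_prev = np.asarray(f_prev, dtype=float)
    return float(np.sum((f_curr - f_prev) ** 2))

def filter_stability(f_curr, f_prev, v=156250.0):
    """Stability value theta = 1.25 - LSFdist / v, limited to [0, 1];
    v = 156250 in the narrowband case, 400000 in the wideband case."""
    theta = 1.25 - lsf_distance(f_curr, f_prev) / v
    return float(min(max(theta, 0.0), 1.0))

def fading_factor_first_loss(theta):
    """Interpolated fading factor for the first missing/erroneous frame:
    0.65 for theta = 0 (very unstable) up to 0.85 for theta = 1 (very stable)."""
    theta = min(max(theta, 0.0), 1.0)
    return 0.65 + theta * 0.2

def modified_gain_factor(received_gain, theta):
    """Modified gain factor = received (original) gain factor * fading factor."""
    return received_gain * fading_factor_first_loss(theta)
```

For the worked examples above, modified_gain_factor(2.0, 1.0) yields 1.7 and modified_gain_factor(2.0, 0.0) yields 1.3 (up to floating-point rounding).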
The previous spectral values can then, for example, be multiplied by the modified gain factor to obtain the spectral reset values. The suppression frame generator can thus be adapted to generate the spectral reset values also based on frame class information. According to an application, the suppression frame generator can be adapted to generate the spectral reset values also depending on the number of consecutive frames that did not reach the receiver or that are erroneous. In one application, the suppression frame generator can be adapted to calculate a fading factor based on the filter stability value and based on the number of consecutive frames that did not reach the receiver or that are erroneous. The suppression frame generator can furthermore be adapted to generate the spectral reset values by multiplying the fading factor by at least some of the previous spectral values. As an alternative, the suppression frame generator can be adapted to generate the spectral reset values by multiplying the fading factor by at least some values from a group of intermediate values. Each of the intermediate values depends on at least one of the previous spectral values. For example, the group of intermediate values could have been generated by modifying the previous spectral values. Or, a synthesis signal in the spectral domain may have been generated based on the previous spectral values, and the spectral values of the synthesis signal may form the group of intermediate values. In a further application, the fading factor can be multiplied by an original gain factor to obtain a generated gain factor. The generated gain factor is then multiplied by at least some of the previous spectral values, or by at least some of the values from the group of intermediate values mentioned above, to obtain the spectral reset values. The fading factor value depends on the filter stability value and on the number of consecutive lost or erroneous frames and can, for example, take tabulated values depending on these parameters, where "number of consecutive missing/erroneous frames = 1" indicates that the immediate predecessor of the missing/erroneous frame was error-free. As can be seen from the following example, the fading factor can be updated each time a frame does not arrive or is erroneous, based on the last fading factor. For example, if the immediate predecessor of a missing/erroneous frame is error-free, then the fading factor may be 0.8. If the subsequent frame is also missing or erroneous, the fading factor is updated based on the previous fading factor by multiplying the previous fading factor by an update factor of 0.65: fading factor = 0.8 · 0.65 = 0.52, and so on. Some or all of the previous spectral values can be multiplied by the fading factor itself. Alternatively, the fading factor can be multiplied by an original gain factor to obtain a generated gain factor. The generated gain factor can then be multiplied by each (or some) of the previous spectral values (or by intermediate values derived from the previous spectral values) to obtain the spectral reset values. It should be noted that the fading factor may also depend on the filter stability value. For example, such a tabulation may also comprise definitions of the fading factor for filter stability values of 1.0, 0.5 or any other value; fading factor values for intermediate filter stability values can be approximated.
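The update of the fading factor over consecutive lost frames corresponds to the following sketch. The concrete constants 0.8 and 0.65 are taken from the worked example above and would in practice also depend on the filter stability value; the function name is illustrative.

```python
def cumulative_fading_factor(num_consecutive_lost, first_factor=0.8, update_factor=0.65):
    """Fading factor after a given number of consecutive missing/erroneous
    frames: the first lost frame uses first_factor, and every further lost
    frame multiplies the previous fading factor by update_factor
    (0.8, 0.8 * 0.65 = 0.52, 0.52 * 0.65 = 0.338, ...)."""
    if num_consecutive_lost < 1:
        return 1.0
    factor = first_factor
    for _ in range(num_consecutive_lost - 1):
        factor *= update_factor
    return factor

# cumulative_fading_factor(1) -> 0.8, cumulative_fading_factor(2) -> 0.52
```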
In an additional application, the fading factor can be determined by using a formula that calculates the fading factor based on the filter stability value and based on the number of consecutive frames that did not reach the receiver or that are erroneous. As described above, the previous spectral values stored in the memory unit can be spectral values of the audio signal. To prevent disturbing artifacts from being generated, the suppression frame generator can, as explained above, generate the spectral reset values based on a filter stability value. However, the generated replacement signal portion may still have a repetitive character. Therefore, according to an application, it is also proposed to modify the previous spectral values, for example, the spectral values of the previously received frame, by randomly inverting the sign of the spectral values. For example, the suppression frame generator randomly decides, for each of the previous spectral values, whether the sign of the spectral value is inverted or not, for example, whether the spectral value is multiplied by -1 or not. Thereby, the repetitive character of the replacement audio signal frame with respect to its predecessor frame is reduced. In the following, suppression in an LD-USAC decoder according to an application is described. In this application, suppression works on the spectral data, just before the LD-USAC decoder conducts the frequency-to-time conversion. In such an application, the values of an incoming audio frame are used to decode the encoded audio signal by generating a synthesis signal in the spectral domain. For this, an intermediate signal in the spectral domain is generated based on the values of the incoming audio frame. Noise filling is conducted on the values quantized to zero. The encoded predictive filter coefficients define a prediction filter which is then applied to the intermediate signal to generate the synthesis signal representing the decoded/reconstructed audio signal in the frequency domain. Figure 6 illustrates an audio signal decoder according to an application. The audio signal decoder comprises an apparatus 610 for decoding spectral audio signal values and an apparatus 620 for generating spectral reset values according to one of the applications described above. The apparatus 610 for decoding spectral audio signal values generates the spectral values of the decoded audio signal as described above when an error-free audio frame arrives. In the application of Figure 6, the spectral values of the synthesis signal can then be stored in a memory unit of the apparatus 620 for generating spectral reset values. These spectral values of the decoded audio signal are decoded on the basis of the received error-free audio frame and are therefore related to the previously received error-free audio frame. When a current frame is missing or erroneous, the apparatus 620 for generating spectral reset values is informed that spectral reset values are required. The suppression frame generator of the apparatus 620 for generating spectral reset values then generates spectral reset values in accordance with one of the applications described above. For example, the spectral values of the last good frame are slightly modified by the suppression frame generator by randomly inverting their signs. Then a fade-out is applied to these spectral values. The fading may depend on the stability of the previous prediction filter and on the number of consecutive lost frames.
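The storage and concealment flow around Figure 6 can be sketched as follows. This combines the pieces described above and is purely illustrative: the class name, the per-frame fading progression and the way the stability value is passed in are assumptions, not the exact LD-USAC procedure.

```python
import numpy as np

class SpectralConcealment:
    """Sketch of the Figure 6 flow: on every error-free frame the synthesis
    spectrum is stored in the memory unit; on a missing/erroneous frame,
    reset values are generated from the stored spectrum with random sign
    inversion and a stability-dependent fade-out."""

    def __init__(self):
        self.previous_spectrum = None      # content of the memory unit 110
        self.num_consecutive_lost = 0

    def on_good_frame(self, synthesis_spectrum):
        self.previous_spectrum = np.asarray(synthesis_spectrum, dtype=float).copy()
        self.num_consecutive_lost = 0
        return self.previous_spectrum

    def on_lost_frame(self, stability, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        self.num_consecutive_lost += 1
        # Assumed progression: the stability-dependent first-loss factor is
        # applied once per consecutive lost frame.
        fade = (0.65 + 0.2 * min(max(stability, 0.0), 1.0)) ** self.num_consecutive_lost
        signs = rng.choice((-1.0, 1.0), size=self.previous_spectrum.shape)
        return fade * signs * self.previous_spectrum
```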
The generated spectral reset values are then used as spectral reset values for the audio signal, and a frequency-to-time transformation is then conducted to obtain a time-domain audio signal. In LD-USAC, as well as in USAC and MPEG-4 (MPEG = Moving Picture Experts Group), temporal noise shaping (TNS) can be employed. By temporal noise shaping, the temporal structure of the noise is controlled. On the decoder side, a filter operation is applied to the spectral data based on the noise shaping information. More information on temporal noise shaping can, for example, be found in [4]: ISO/IEC 14496-3:2005: Information technology - Coding of audio-visual objects - Part 3: Audio, 2005. The applications are based on the finding that, in the case of a start/transient, TNS is highly active. Thus, by determining whether TNS is highly active or not, one can estimate whether an onset/transient is present. According to one application, a prediction gain of the TNS is calculated on the receiver side. On the receiver side, at first, the spectral values received in an error-free audio frame are processed to obtain first intermediate spectral values a_i. Then TNS is conducted and, with this, a second set of intermediate spectral values b_i is obtained. A first energy value E1 is calculated for the first intermediate spectral values and a second energy value E2 is calculated for the second set of intermediate spectral values. To obtain the TNS prediction gain gTNS, the second energy value can be divided by the first energy value. For example, gTNS can be set to:

gTNS = E2 / E1 = ( Σ_{i=0}^{N−1} b_i² ) / ( Σ_{i=0}^{N−1} a_i² )

(N = number of spectral values considered). According to one application, the suppression frame generator is adapted to generate the spectral reset values based on the previous spectral values, based on the filter stability value and also based on the prediction gain of the temporal noise shaping, when temporal noise shaping is conducted on a previously received error-free frame. According to another application, the suppression frame generator is adapted to generate the spectral reset values also based on the number of consecutive missing or erroneous frames. The greater the prediction gain, the faster the fade-out should be. For example, consider a filter stability value of 0.5 and assume that the prediction gain is high, for example, gTNS = 6; then a fading factor could, for example, be 0.65 (= fast fade-out). In contrast, again consider a filter stability value of 0.5, but assume that the prediction gain is low, for example 1.5; then a fading factor might, for example, be 0.95 (= slow fade-out). The TNS prediction gain can also influence which values are stored in the memory unit of an apparatus for generating spectral reset values. If the prediction gain gTNS is less than a certain threshold (e.g. threshold = 5.0), then the spectral values after applying TNS are stored in the memory unit as previous spectral values. In the case of a missing or erroneous frame, spectral reset values are generated based on these previous spectral values. Otherwise, if the prediction gain gTNS is greater than or equal to the threshold value, the spectral values before applying TNS are stored in the memory unit as previous spectral values. In the case of a missing or erroneous frame, spectral reset values are generated based on these previous spectral values; in this case, TNS is not applied to these previous spectral values.
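The prediction gain and the resulting choice of which spectral values to store can be sketched as follows. The function names are illustrative, and the threshold of 5.0 is the example value from the text.

```python
import numpy as np

def tns_prediction_gain(first_intermediate, second_intermediate):
    """g_TNS = E2 / E1: energy of the spectral values after TNS (b_i) divided
    by the energy of the spectral values before TNS (a_i)."""
    a = np.asarray(first_intermediate, dtype=float)
    b = np.asarray(second_intermediate, dtype=float)
    e1 = float(np.sum(a * a))
    e2 = float(np.sum(b * b))
    return e2 / e1 if e1 > 0.0 else float("inf")

def values_to_store(first_intermediate, second_intermediate, threshold=5.0):
    """Value selector: store the pre-TNS values if g_TNS >= threshold
    (an onset/transient is likely), otherwise store the post-TNS values."""
    g = tns_prediction_gain(first_intermediate, second_intermediate)
    return first_intermediate if g >= threshold else second_intermediate
```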
Accordingly, Figure 7 illustrates an audio signal decoder according to a corresponding application. The audio signal decoder comprises a decoding unit 710 for generating the first intermediate spectral values based on a received error-free frame. Furthermore, the audio signal decoder comprises a temporal noise shaping unit 720 for performing temporal noise shaping on the first intermediate spectral values to obtain a second set of intermediate spectral values. Furthermore, the audio signal decoder comprises a prediction gain calculator 730 for calculating a prediction gain of the temporal noise shaping depending on the first intermediate spectral values and the second set of intermediate spectral values. Furthermore, the audio signal decoder comprises an apparatus 740 in accordance with one of the applications described above for generating spectral reset values when a current audio frame has not been received or is erroneous. Furthermore, the audio signal decoder comprises a value selector 750 for storing the first intermediate spectral values in the memory unit 745 of the apparatus 740 for generating the spectral reset values, if the prediction gain is greater than or equal to a threshold value, or for storing the second set of intermediate spectral values in the memory unit 745 of the apparatus 740 for generating the spectral reset values, if the prediction gain is less than the threshold value. The threshold value can, for example, be a predefined value. For example, the threshold value can be preset in the audio signal decoder. According to another application, suppression is conducted on the spectral data after the first decoding step and before any noise filling, global gain and/or TNS is conducted. Such an application is shown in Figure 8. Figure 8 illustrates a decoder according to an additional application. The decoder comprises a first decoding module 810. The first decoding module 810 is adapted to generate spectral values based on a received error-free audio frame. The generated spectral values are then stored in the memory unit of an apparatus 820 for generating spectral reset values. In addition, the generated spectral values are input to a processing module 830, which processes the generated spectral values by conducting TNS, applying noise filling and/or applying a global gain, to obtain the spectral audio values of the decoded audio signal. If a current frame is missing or erroneous, the apparatus 820 for generating spectral reset values generates the spectral reset values and feeds them to the processing module 830. According to the application illustrated in Figure 8, the decoding module or the processing module conducts some or all of the following steps in the case of suppression: The spectral values, for example, of the last good frame, are slightly modified by randomly inverting their signs. In an additional step, noise filling is performed based on random noise in the spectral bins quantized to zero. In another step, the noise factor is slightly adapted compared to the previously received error-free frame. In an additional step, spectral noise shaping is obtained by applying the LPC-coded weighted spectral envelope (LPC = Linear Predictive Coding) in the frequency domain. For example, the LPC coefficients of the last received error-free frame can be used. In an additional application, averaged LPC coefficients can be used. For example, for each LPC coefficient of a filter, an average of the corresponding coefficient values of the last three error-free received frames can be generated, and the averaged LPC coefficients can be applied.
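A minimal sketch of this averaging follows; the function name is illustrative, and it simply averages coefficient-wise over the LPC coefficient sets of the last three error-free frames.

```python
import numpy as np

def averaged_lpc_coefficients(last_three_frames_lpc):
    """Coefficient-wise average of the LPC coefficient sets of the last three
    error-free received frames; expects a sequence of three equally long
    coefficient arrays and returns one averaged coefficient array."""
    coeffs = np.asarray(last_three_frames_lpc, dtype=float)  # shape (3, order)
    return coeffs.mean(axis=0)
```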
In a subsequent step, a fade-out can be applied to these spectral values. The fading may depend on the number of consecutive lost or erroneous frames and on the stability of the previous LP filter. In addition, the prediction gain information can be used to influence the fading. The greater the prediction gain, the faster the fade-out can be. The application of Figure 8 is a little more complex than the application of Figure 6, but provides better audio quality. Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Similarly, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Depending on the requirements of certain implementations, the applications of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are able to cooperate) with a programmable computer system such that the respective method is carried out. Some applications according to the invention comprise a data carrier with electronically readable control signals, which are capable of cooperating with a programmable computer system in such a way that one of the methods described herein is carried out. Generally speaking, the applications of the present invention can be implemented as a computer program product with a program code, the program code being operative for carrying out one of the methods when the computer program product runs on a computer. The program code can, for example, be stored on a machine-readable medium. Other applications comprise the computer program for executing one of the methods described herein, stored on a machine-readable medium or on a non-transitory storage medium. In other words, an application of the method of the invention is therefore a computer program with program code for performing one of the methods described herein, when the computer program is executed on a computer. A further application of the method of the invention is therefore a data carrier (either a digital storage medium or a computer-readable medium) comprising, recorded thereon, the computer program for carrying out one of the methods described herein. A further application of the method of the invention is therefore a data stream or a sequence of signals representing the computer program for carrying out one of the methods described herein. The data stream or the sequence of signals can, for example, be configured to be transferred over a data communication connection, for example, over the Internet or radio channels. A further application comprises a processing means, for example a computer or a programmable logic device, configured for or adapted to perform one of the methods described herein. An additional application comprises a computer having installed thereon the computer program for executing one of the methods described herein.
In some applications, a programmable logic device (e.g., a field-programmable gate array) may be used to perform some or all of the functionality of the methods described herein. In some applications, a field-programmable gate array may cooperate with a microprocessor to perform one of the methods described herein. Generally speaking, the methods are preferably performed by any hardware device. The applications described above are merely illustrative of the principles of the present invention. It is understood that modifications and variations of the arrangements and details described herein will be apparent to others skilled in the art. It is therefore intended that the invention be limited only by the scope of the appended patent claims and not by the specific details presented by way of description and explanation of the applications of the present invention.

Literature
[1]: 3GPP, "Audio codec processing functions; Extended Adaptive Multi-Rate - Wideband (AMR-WB+) codec; Transcoding functions", 2009, 3GPP TS 26.290.
[2]: USAC codec (Unified Speech and Audio Codec), ISO/IEC CD 23003-3, dated September 24, 2010.
[3]: 3GPP, "Speech codec speech processing functions; Adaptive Multi-Rate - Wideband (AMR-WB) speech codec; Transcoding functions", 2009, V9.0.0, 3GPP TS 26.190.
[4]: ISO/IEC 14496-3:2005: Information technology - Coding of audio-visual objects - Part 3: Audio, 2005.
[5]: ITU-T G.718 (06-2008) specification.
Claims (12)
[0001] 1. An apparatus (100) for generating spectral reset values for an audio signal, characterized in that it comprises: a buffer unit (110) for storing previous spectral values relating to a previously received error-free audio signal frame, and a suppression frame generator (120) for generating the spectral reset values when a current audio signal frame is not received or is erroneous, wherein the previously received error-free audio signal frame comprises filter information, the filter information having an associated filter stability value indicating a stability of a predictive filter defined by the filter information, and wherein the suppression frame generator (120) is adapted to generate the spectral reset values based on the previous spectral values and based on the filter stability value.
[0002] 2. An apparatus (100) according to claim 1, characterized in that the suppression frame generator (120) is adapted to generate the spectral reset values by randomly inverting the signs of the previous spectral values.
[0003] 3. An apparatus (100) according to claim 1 or 2, characterized in that the suppression frame generator (120) is configured to generate the spectral reset values by multiplying each of the previous spectral values by a first gain factor when the filter stability value has a first value, and by multiplying each of the previous spectral values by a second gain factor, which is smaller than the first gain factor, when the filter stability value has a second value that is smaller than the first value.
[0004] 4. An apparatus according to any one of the preceding claims, characterized in that the previously received error-free audio signal frame comprises first predictive filter coefficients of the predictive filter, wherein a frame preceding the previously received error-free audio signal frame comprises second predictive filter coefficients, and wherein the filter stability value depends on the first predictive filter coefficients and on the second predictive filter coefficients.
[0005] 5. An apparatus according to claim 4, characterized in that the filter stability value depends on a distance measure LSFdist, and in that the distance measure LSFdist is defined by the following formula:
[0006] 6. An apparatus (100) according to any one of the preceding claims, characterized in that the suppression frame generator (120) is adapted to generate the spectral reset values also based on frame class information relating to the previously received error-free audio signal frame, wherein the frame class information indicates that the previously received error-free audio signal frame is classified as "artificial onset", "onset", "voiced transition", "unvoiced transition", "unvoiced" or "voiced".
[0007] 7. An apparatus (100) according to any one of the preceding claims, characterized in that the suppression frame generator (120) is adapted to generate the spectral reset values also based on a number of consecutive frames that were erroneous since a last error-free audio signal frame arrived at a receiver, wherein no other error-free audio signal frame has arrived at the receiver since the last error-free audio signal frame arrived at the receiver.
[0008] 8. An apparatus (100) according to claim 7, characterized in that the suppression frame generator (120) is adapted to calculate a fade-out factor based on the filter stability value and based on the number of consecutive frames that were erroneous, and in that the suppression frame generator (120) is adapted to generate the spectral reset values by multiplying the fade-out factor by at least some of the previous spectral values, or by at least some values of a group of intermediate values, wherein each of the intermediate values depends on at least one of the previous spectral values.
[0009] 9. An audio signal decoder, comprising: an apparatus (610, 710, 810) for decoding spectral audio signal values, and an apparatus (620, 740, 820) for generating spectral reset values according to one of claims 1 to 8, characterized in that the apparatus (610, 710, 810) for decoding spectral audio signal values is adapted to decode, as the spectral audio signal values, the spectral values of a previously received error-free audio signal frame of an audio signal, wherein the apparatus (610, 710, 810) for decoding spectral audio signal values is furthermore adapted to store the spectral values of the previously received error-free audio signal frame in a buffer unit of the apparatus (620, 740, 820) for generating spectral reset values, and wherein the apparatus (620, 740, 820) for generating spectral reset values is adapted to generate the spectral reset values based on the spectral values stored in the buffer unit when a current audio signal frame is not received or is erroneous.
[0010] 10. An audio signal decoder according to claim 9, characterized in that: the apparatus for decoding is a decoding unit (710) for generating, as the spectral values of the previously received error-free audio signal frame, first intermediate spectral values based on a received error-free audio signal frame, wherein the apparatus (740) according to one of claims 1 to 8 is adapted to generate the spectral reset values when a current audio signal frame is not received or is erroneous, and wherein the audio signal decoder furthermore comprises: a temporal noise shaping unit (720) for conducting temporal noise shaping to obtain second intermediate spectral values, a prediction gain calculator (730) for calculating a prediction gain of the temporal noise shaping depending on the first intermediate spectral values and depending on the second intermediate spectral values, and a value selector (750) for storing the first intermediate spectral values in the buffer unit (745) of the apparatus (740) for generating spectral reset values if the prediction gain is greater than or equal to a threshold value, or for storing the second intermediate spectral values in the buffer unit of the apparatus for generating spectral reset values if the prediction gain is smaller than the threshold value.
[0011] 11. An audio signal decoder according to claim 9, characterized in that the audio signal decoder furthermore comprises a processing module (830) for processing the spectral audio signal values by conducting temporal noise shaping, applying noise filling or applying a global gain, to obtain spectral audio values of a decoded audio signal, and wherein the apparatus (820) for generating spectral reset values is adapted to generate the spectral reset values and to feed them into the processing module (830) when a current audio signal frame is not received or is erroneous.
[0012]
12. A method for generating spectral reset values for an audio signal, characterized in that it comprises: storing previous spectral values relating to a previously received error-free audio signal frame, and generating the spectral reset values when a current audio signal frame is not received or is erroneous, wherein the previously received error-free audio signal frame comprises filter information, the filter information having an associated filter stability value indicating a stability of a predictive filter defined by the filter information, and wherein the spectral reset values are generated based on the previous spectral values and based on the filter stability value.
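For illustration only, and under the assumption that the prediction gain of claim 10 is an energy ratio between the spectral values before and after temporal noise shaping, a value selector as recited in claim 10 could be sketched in C as follows; the threshold, the energy-ratio definition and all names are assumptions made for this example, not values taken from the claims.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: a value selector in the spirit of claim 10. It fills the
     * concealment buffer with either the intermediate spectral values before
     * temporal noise shaping (TNS) or the values after TNS, depending on an
     * assumed energy-ratio prediction gain and an assumed threshold.             */
    static float tns_prediction_gain(const float *pre_tns, const float *post_tns, int n)
    {
        float e_pre = 0.0f, e_post = 0.0f;
        for (int i = 0; i < n; ++i) {
            e_pre  += pre_tns[i]  * pre_tns[i];
            e_post += post_tns[i] * post_tns[i];
        }
        return (e_post > 0.0f) ? e_pre / e_post : 1.0f;        /* assumed definition */
    }

    static void select_values_for_buffer(const float *pre_tns, const float *post_tns,
                                         float *buffer, int n)
    {
        const float threshold = 2.0f;                          /* assumed threshold   */
        const float *src = (tns_prediction_gain(pre_tns, post_tns, n) >= threshold)
                               ? pre_tns                       /* keep pre-TNS values  */
                               : post_tns;                     /* keep post-TNS values */
        memcpy(buffer, src, (size_t)n * sizeof(float));
    }

    int main(void)
    {
        float pre[4]  = { 1.0f, 2.0f, 3.0f, 4.0f };
        float post[4] = { 0.5f, 1.0f, 1.5f, 2.0f };
        float buffer[4];
        select_values_for_buffer(pre, post, buffer, 4);
        printf("first buffered value: %f\n", buffer[0]);
        return 0;
    }

The idea carried by the sketch is that the concealment buffer is filled with whichever intermediate spectral values are expected to be the better starting point for generating spectral reset values: the values before temporal noise shaping when the prediction gain is high, and the values after temporal noise shaping otherwise.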